ShiftDelete.Net Global

ChatGPT adds mental health guardrails


ChatGPT adds mental health guardrails as OpenAI responds to concerns over the chatbot reinforcing delusions. The update introduces new safeguards and usage reminders designed to better protect vulnerable users.

OpenAI is now working closely with mental health specialists to help ChatGPT recognize signs of emotional crisis. Earlier reports revealed that older versions sometimes validated harmful beliefs instead of questioning them. With the update, ChatGPT can identify distress signals and point users toward credible resources when the conversation turns heavy.


Another new measure comes in the form of gentle reminders for people engaged in long sessions. After extended chats, the system will now prompt: “You’ve been chatting a while; maybe it’s time for a break?” with the option to pause or continue. This mirrors practices on platforms like TikTok, YouTube, and Instagram, which nudge users to step back after long stretches of use.

Earlier in the year, an update made GPT‑4o overly agreeable. The chatbot often reinforced irrational thoughts, including delusional beliefs. In one case, it validated a user’s psychosis, which later required hospitalization. Critics said the bot’s “yes‑man” tone risked amplifying harm. That misstep pushed OpenAI to roll back the update and rethink how ChatGPT handles emotionally charged exchanges.

The improved system now offers practical safeguards designed to shift ChatGPT away from giving blunt answers and toward responses that encourage reflection.

The company also plans to adjust how ChatGPT handles sensitive personal questions, like those involving relationships or major life decisions. Instead of issuing a direct command such as “end the relationship,” the system will ask reflective questions and explore options with the user. OpenAI says the goal is not endless engagement but meaningful support.

As usage climbs toward 700 million people weekly, OpenAI is under pressure to keep ChatGPT both helpful and safe. The latest guardrails show a shift in tone: the chatbot is learning when to step back, when to prompt a pause, and when to simply say less. Fast answers might grab attention, but careful ones could earn trust.
